A term from hypothesis testing: a Type II error occurs when you fail to detect a real effect, typically because the statistical power of your experiment or analysis method is too low. Researchers and paper reviewers often worry less about a Type II error than a Type I error, because if you wrongly conclude that something is true and base future science on that false belief, everything built on it may also be flawed. However, Type II errors are also critical. A certain proportion of statistically significant results will occur by chance, so if experiments are underpowered and Type II errors are common, too few true positive results will be published relative to the false positives.
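A rough worked example may make the proportion argument concrete. The numbers below (1,000 hypotheses, 10% of them true effects, a 5% significance level, and 20% versus 80% power) are illustrative assumptions, not figures from the text, and the function name is likewise made up; it is a minimal sketch of the bookkeeping, not a prescribed calculation.

```python
# Minimal sketch (assumed numbers): how low power shifts the mix of
# "significant" results toward false positives.

def significant_result_mix(n_hypotheses, prop_true, alpha, power):
    """Expected true and false positives among significant results."""
    n_true_effects = n_hypotheses * prop_true        # hypotheses with a real effect
    n_null_effects = n_hypotheses * (1 - prop_true)  # hypotheses with no real effect
    true_positives = n_true_effects * power          # real effects actually detected
    false_positives = n_null_effects * alpha         # chance "discoveries"
    share_false = false_positives / (true_positives + false_positives)
    return true_positives, false_positives, share_false

# 1,000 hypotheses, 10% of them real effects, tested at alpha = 0.05.
for power in (0.2, 0.8):  # weak power vs. the conventional 80%
    tp, fp, share = significant_result_mix(1000, 0.10, 0.05, power)
    print(f"power={power:.0%}: true positives={tp:.0f}, "
          f"false positives={fp:.0f}, share false={share:.0%}")
```

Under these assumed numbers, 20% power yields about 20 true positives against 45 false positives (roughly two thirds of the significant results are spurious), whereas 80% power yields 80 true positives against the same 45 false positives, even though the per-test false positive rate is 5% in both cases.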